INTERSPEECH 2009 - Language and Multimodal

Total: 111

#1 Factor analysis and SVM for language recognition

Authors: Florian Verdet ; Driss Matrouf ; Jean-François Bonastre ; Jean Hennebert

Statistical classifiers operate on features that generally include both useful and useless information. These two types of information are difficult to separate in the feature domain. Recently, a new paradigm based on Factor Analysis (FA) proposed a model decomposition into useful and useless components. This method has been applied successfully to speaker recognition tasks. In this paper, we study the use of FA for language recognition. We propose a classification method based on SDC features and Gaussian Mixture Models (GMM). We present well-performing systems using Factor Analysis and FA-based Support Vector Machine (SVM) classifiers. Experiments are conducted using NIST LRE 2005's primary condition. The relative equal error rate reduction obtained by the best factor analysis configuration with respect to the baseline GMM-UBM system is over 60%, corresponding to an EER of 6.59%.
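
As a rough illustration of the decomposition above, the sketch below (NumPy, with illustrative dimensions and a least-squares point estimate rather than the paper's EM-based estimator) models a GMM mean supervector as s = m + Ux, where m carries the useful language information and Ux spans the useless session variability, and removes the estimated useless part.

```python
# Minimal sketch of the factor-analysis idea: a supervector s = m + U x,
# where m is the useful (language) component and U x the useless
# (session/channel) variability. Dimensions and the least-squares
# estimate of x are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 512          # supervector dimension (assumed)
R = 40           # rank of the session-variability subspace (assumed)

U = rng.standard_normal((D, R)) / np.sqrt(D)   # session subspace (would be trained)
m = rng.standard_normal(D)                     # language/UBM component

s = m + U @ rng.standard_normal(R)             # observed supervector

# Point estimate of the session factors x, then remove the useless part.
x, *_ = np.linalg.lstsq(U, s - m, rcond=None)
s_compensated = s - U @ x                      # ~ m, i.e. the useful component

print(np.linalg.norm(s_compensated - m))       # small residual
```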

#2 Exploring universal attribute characterization of spoken languages for spoken language recognition

Authors: Sabato Marco Siniscalchi ; Jeremy Reed ; Torbjørn Svendsen ; Chin-Hui Lee

We propose a novel universal acoustic characterization approach to spoken language identification (LID), in which any spoken language is described with a common set of fundamental units defined “universally.” Specifically, manner and place of articulation form this unit inventory and are used to build a set of universal attribute models with data-driven techniques. Using vector space modeling approaches to LID, a spoken utterance is first decoded into a sequence of attributes. Then, a feature vector consisting of co-occurrence statistics of attribute units is created, and the final LID decision is made with a set of vector space language classifiers. Although the present study is only in its preliminary stage, promising results comparable to acoustically rich phone-based LID systems have already been obtained on the NIST 2003 LID task. The results provide clear insight for further performance improvements and encourage a continuing exploration of the proposed framework.
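
The vector-space step lends itself to a short sketch. The following Python is a minimal, hedged illustration: the attribute inventory and the use of raw bigram counts are assumptions, not the paper's exact feature design.

```python
# An utterance, already decoded into a sequence of articulatory
# attribute units, is mapped to a vector of attribute co-occurrence
# (bigram) statistics for vector-space classification.
from itertools import product

ATTRIBUTES = ["stop", "fricative", "nasal", "glide", "vowel"]  # assumed inventory
BIGRAMS = list(product(ATTRIBUTES, repeat=2))
INDEX = {bg: i for i, bg in enumerate(BIGRAMS)}

def cooccurrence_vector(attribute_seq):
    """Count attribute bigrams and L1-normalize into a feature vector."""
    vec = [0.0] * len(BIGRAMS)
    for a, b in zip(attribute_seq, attribute_seq[1:]):
        vec[INDEX[(a, b)]] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]

decoded = ["stop", "vowel", "nasal", "vowel", "fricative", "vowel"]
features = cooccurrence_vector(decoded)
# `features` would then be fed to per-language vector-space classifiers.
```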

#3 On the use of phonological features for automatic accent analysis

Authors: Abhijeet Sangwan ; John H. L. Hansen

In this paper, we present an automatic accent analysis system based on phonological features (PFs). The proposed system exploits the knowledge of articulation embedded in phonology by rapidly building Markov models (MMs) of PFs extracted from accented speech. The Markov models capture information in the PF space along two dimensions of articulation: PF state-transitions and state-durations. Furthermore, by utilizing MMs of native and non-native accents, a new statistical measure of “accentedness” is developed which rates the articulation of a word on a scale from native-like (-1) to non-native-like (+1). The proposed methodology is then used to perform an automatic cross-sectional study of accented English spoken by native speakers of Mandarin Chinese (N-MC). The experimental results demonstrate the capability of the proposed system to rapidly perform quantitative as well as qualitative analysis of foreign accents. The work developed in this paper is easily assimilated into language learning systems, and has impact in the areas of speaker and speech recognition.
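
A minimal sketch of this scoring idea follows: first-order Markov models over phonological-feature states are trained for native and non-native speech, and a word is scored by a log-likelihood ratio squashed to [-1, +1]. The toy state inventory and the tanh normalization are assumptions, not the paper's exact formula.

```python
# Train PF state-transition models for native and non-native accents,
# then rate a word by a normalized log-likelihood ratio.
import math
from collections import defaultdict

def train_transitions(sequences, states):
    """Maximum-likelihood PF state-transition probabilities with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: (counts[a][b] + 1) / (sum(counts[a].values()) + len(states))
                for b in states} for a in states}

def loglik(seq, trans):
    return sum(math.log(trans[a][b]) for a, b in zip(seq, seq[1:]))

STATES = ["closed", "critical", "open"]  # toy PF state inventory (assumed)
native = train_transitions([["closed", "open", "open", "critical"]], STATES)
nonnative = train_transitions([["closed", "critical", "closed", "critical"]], STATES)

word = ["closed", "critical", "closed"]
llr = loglik(word, nonnative) - loglik(word, native)
accentedness = math.tanh(llr)  # -1 ~ native-like, +1 ~ non-native-like
print(round(accentedness, 3))
```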

#4 Language recognition using language factors

Authors: Fabio Castaldo ; Sandro Cumani ; Pietro Laface ; Daniele Colibro

Language recognition systems based on acoustic models reach state-of-the-art performance using discriminative training techniques.

#5 Automatic accent detection: effect of base units and boundary information

Authors: Je Hun Jeon ; Yang Liu

Automatic prominence or pitch accent detection is important because it enables automatic prosodic annotation of speech corpora, as well as providing additional features for other tasks such as keyword detection. In this paper, we evaluate how accent detection performance changes with different base units and with the kind of boundary information available. We compare word-, syllable-, and vowel-based units when their boundaries are provided. We also automatically estimate syllable boundaries using energy contours when phone-level alignment is available. In addition, we utilize a sliding window with fixed length under the condition of unknown boundaries. Our experiments show that when boundary information is available, using a longer base unit achieves better performance. In the case of no boundary information, using a moving window with a fixed size achieves performance similar to using syllable information in word-level evaluation, suggesting that accent detection can be performed without relying on a speech recognizer to generate boundaries.
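
The fixed-window condition can be sketched briefly. The following illustration (assumed window length, shift, and pooling functions) pools frame-level features over a moving window so that each window can be scored by an accent classifier without any boundary information.

```python
# Pool frame-level prosodic features over a fixed-length sliding window.
import numpy as np

def sliding_window_features(frames, win=20, shift=10):
    """Pool frame features (frames: T x D) with mean+max over each window."""
    pooled = []
    for start in range(0, len(frames) - win + 1, shift):
        chunk = frames[start:start + win]
        pooled.append(np.concatenate([chunk.mean(axis=0), chunk.max(axis=0)]))
    return np.stack(pooled)

frames = np.random.rand(200, 3)      # e.g. 200 frames of (energy, F0, delta-F0)
windows = sliding_window_features(frames)
print(windows.shape)                 # (19, 6): one feature vector per window
```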

#6 Age verification using a hybrid speech processing approach

Authors: Ron M. Hecht ; Omer Hezroni ; Amit Manna ; Ruth Aloni-Lavi ; Gil Dobry ; Amir Alfandary ; Yaniv Zigel

The human speech production system is a multi-level system. On the upper level, it starts with the information one wants to transmit; it ends on the lower level with the materialization of that information into a speech signal. Most recent work on age estimation has focused on the lower, acoustic level. In this research, upper, lexical-level information is utilized for age-group verification, and it is shown that one's vocabulary reflects one's age. Several age-group verification systems based on automatic transcripts are proposed. In addition, a hybrid approach is introduced that combines the word-based system with an acoustic-based system. Experiments were conducted on a four-age-group verification task using the Fisher corpora, where an average equal error rate (EER) of 28.7% was achieved using the lexical-based approach and 28.0% using an acoustic approach. By merging these two approaches, the verification error was reduced to 24.1%.

#7 Information bottleneck based age verification

Authors: Ron M. Hecht ; Omer Hezroni ; Amit Manna ; Gil Dobry ; Yaniv Zigel ; Naftali Tishby

Word N-gram models can be used for word-based age-group verification. In this paper, the agglomerative information bottleneck (AIB) approach is used to tackle one of the most fundamental drawbacks of word N-gram models: their abundance of irrelevant information. It is demonstrated that irrelevant information can be omitted by joining words into word-clusters; this provides a mechanism to transform any sequence of words into a sequence of word-cluster labels. Consequently, word N-gram models are converted into more compact word-cluster N-gram models. Age verification experiments were conducted on the Fisher corpora. Their goal was to verify the age-group of the speaker of an unknown speech segment. In these experiments, an N-gram model was compressed to a fifth of its original size without reducing verification performance. In addition, an improvement in verification accuracy is demonstrated by disposing of irrelevant information.
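
The AIB merge step admits a compact sketch. In the standard AIB formulation, the cost of merging two words is their combined prior times the weighted Jensen-Shannon divergence between their class posteriors; the toy vocabulary and distributions below are assumptions.

```python
# One greedy AIB merge: join the word pair that loses the least
# information about the age-group label.
import itertools, math

def js_div(p, q, wp, wq):
    """Weighted Jensen-Shannon divergence between two class posteriors."""
    pi1, pi2 = wp / (wp + wq), wq / (wp + wq)
    m = [pi1 * a + pi2 * b for a, b in zip(p, q)]
    kl = lambda x, y: sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return pi1 * kl(p, m) + pi2 * kl(q, m)

# p(age-group | word) and p(word) for a toy 4-word vocabulary (assumed).
posteriors = {"dude": [0.8, 0.2], "cool": [0.75, 0.25],
              "pension": [0.1, 0.9], "mortgage": [0.15, 0.85]}
priors = {"dude": 0.3, "cool": 0.2, "pension": 0.25, "mortgage": 0.25}

w1, w2 = min(itertools.combinations(posteriors, 2),
             key=lambda ab: (priors[ab[0]] + priors[ab[1]])
                            * js_div(posteriors[ab[0]], posteriors[ab[1]],
                                     priors[ab[0]], priors[ab[1]]))
print("merge:", w1, "+", w2)   # the two youth-skewed words merge first
```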

#8 Discriminative n-gram selection for dialect recognition

Authors: F. S. Richardson ; W. M. Campbell ; P. A. Torres-Carrasquillo

Dialect recognition is a challenging and multifaceted problem. Distinguishing between dialects can rely upon many tiers of interpretation of speech data (e.g., prosodic, phonetic, spectral, and word). High-accuracy automatic methods for dialect recognition typically use either phonetic or spectral characteristics of the input. A challenge with spectral systems, such as those based on shifted-delta cepstral coefficients, is that they achieve good performance but provide little insight into distinctive dialect features. In this work, a novel method based upon discriminative training and phone N-grams is proposed. This approach achieves excellent classification performance, fuses well with other systems, and yields interpretable dialect characteristics in the phonetic tier. The method is demonstrated on data from the LDC and prior NIST language recognition evaluations. The method is also combined with spectral methods to demonstrate state-of-the-art performance in dialect recognition.
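
As a hedged illustration of discriminative N-gram selection (not the authors' exact SVM setup), the sketch below trains a linear SVM on phone N-gram counts with scikit-learn and ranks N-grams by the magnitude of their weights to surface interpretable dialect cues.

```python
# Train a linear SVM on phone N-gram counts, then rank N-grams by
# absolute weight: the top ones are the most dialect-discriminative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Phone transcripts as space-separated phone strings (toy data).
docs = ["aa r t aa r", "aa r m aa r", "ah t ah t", "ah m ah t"]
labels = [0, 0, 1, 1]                       # two dialects

vec = CountVectorizer(analyzer="word", ngram_range=(2, 3), token_pattern=r"\S+")
X = vec.fit_transform(docs)
svm = LinearSVC(C=1.0).fit(X, labels)

ranked = sorted(zip(vec.get_feature_names_out(), svm.coef_[0]),
                key=lambda t: -abs(t[1]))
print(ranked[:5])
```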

#9 Data-driven phonetic comparison and conversion between South African, British and American English pronunciations

Authors: Linsen Loots ; Thomas Niesler

We analyse pronunciations in American, British and South African English pronunciation dictionaries. Three analyses are performed. First, we determine the accuracy with which decision-tree-based grapheme-to-phoneme (G2P) conversion can be applied to each accent; we find little difference between the accents in this regard. Secondly, pronunciations are compared by performing pairwise alignments between the accents. Here we find that South African English pronunciation most closely matches British English. Finally, we apply decision trees to the conversion of pronunciations from one accent to another. We find that pronunciations of unknown words can be determined more accurately from a known pronunciation in a different accent than by means of G2P methods. This has important implications for the development of pronunciation dictionaries in less-resourced varieties of English, and hence also for the development of ASR systems.
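
The accent-to-accent conversion step can be sketched as follows, assuming pairwise-aligned phoneme strings and a one-phoneme context window; the tiny aligned lexicon and the scikit-learn encoder are illustrative choices, not the paper's exact setup.

```python
# Predict each target-accent phoneme from the source-accent phoneme
# and its left/right neighbors with a decision tree.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# (previous, current, next) source-accent phonemes -> target-accent phoneme.
aligned = [
    (("#", "t", "ow"), "t"), (("t", "ow", "#"), "ax"),   # toy US->UK-like mapping
    (("#", "g", "ow"), "g"), (("g", "ow", "#"), "ax"),
    (("#", "t", "iy"), "t"), (("t", "iy", "#"), "iy"),
]
X_sym = [list(ctx) for ctx, _ in aligned]
y = [tgt for _, tgt in aligned]

enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
X = enc.fit_transform(X_sym)

tree = DecisionTreeClassifier().fit(X, y)
print(tree.predict(enc.transform([["#", "s", "ow"], ["s", "ow", "#"]])))
```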

#10 Target-aware language models for spoken language recognition

Authors: Rong Tong ; Bin Ma ; Haizhou Li ; Eng Siong Chng ; Kong-Aik Lee

This paper studies a new way of constructing multiple phone tokenizers for language recognition. In this approach, the phone tokenizers for all target languages share a common set of acoustic models, while each tokenizer has a unique phone-based language model (LM) trained for a specific target language. These target-aware language models (TALM) are constructed to capture the discriminative ability of individual phones for the desired target languages. The parallel phone tokenizers thus formed are shown to achieve better performance than the original phone recognizer. The proposed TALM is very different from the LM in the traditional PPRLM technique. First of all, the TALM applies LM information in the front-end, as opposed to the PPRLM approach, which uses an LM in the system back-end. Furthermore, the TALM exploits discriminative phone occurrence statistics, which differ from the traditional n-gram statistics of the PPRLM approach. A novel way of training the TALM is also studied in this paper. Our experimental results show that the proposed method consistently improves language recognition performance on the NIST 1996, 2003 and 2007 LRE 30-second closed test sets.

#11 Language identification for speech-to-speech translation

Authors: Daniel Chung Yong Lim ; Ian Lane

This paper investigates the use of language identification (LID) in real-time speech-to-speech translation systems. We propose a framework that incorporates LID capability into a speech-to-speech translation system while minimizing the impact on the system's real-time performance. We compare two phone-based LID approaches, namely PRLM and PPRLM, to a proposed extended approach based on Conditional Random Field classifiers. The performance of these three approaches was evaluated for identifying the input language in the CMU English-Iraqi TransTAC system, and the proposed approach obtained significantly higher classification accuracies on two of the three test sets evaluated.

#12 Using prosody and phonotactics in Arabic dialect identification

Authors: Fadi Biadsy ; Julia Hirschberg

While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major communication mode in everyday life; identifying a speaker's dialect is thus critical to speech processing tasks such as automatic speech recognition, as well as to speaker identification. We examine the role of prosodic features (intonation and rhythm) across four Arabic dialects (Gulf, Iraqi, Levantine, and Egyptian) for the purpose of automatic dialect identification. We show that prosodic features can significantly improve identification over a purely phonotactic-based approach, with an identification accuracy of 86.33% for 2-minute utterances.

#13 Efficient combination of confidence measures for machine translation

Authors: Sylvain Raybaud ; David Langlois ; Kamel Smaïli

We present in this paper a twofold contribution to confidence measures for machine translation. First, in order to train and test confidence measures, we present a method to automatically build corpora containing realistic errors. The errors introduced into reference translations simulate classical machine translation errors (word deletion and word substitution) and are supervised by WordNet. Second, we use an SVM to combine original and classical confidence measures at both the word and sentence level. We show that the resulting combination outperforms our best single word-level confidence measure by 14% (absolute), and that the combination of sentence-level confidence measures produces meaningful scores.
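
The combination step can be illustrated with a short, hedged sketch: several word-level confidence scores are stacked into one feature vector per word and an SVM separates correct from erroneous words. The feature names and toy values are assumptions.

```python
# Combine several word-level confidence measures with an SVM.
import numpy as np
from sklearn.svm import SVC

# Each row: [lm_score, posterior, semantic_score] for one target word (toy values).
X = np.array([[0.9, 0.8, 0.7], [0.2, 0.3, 0.1],
              [0.8, 0.9, 0.6], [0.1, 0.2, 0.3]])
y = np.array([1, 0, 1, 0])          # 1 = correct word, 0 = error

clf = SVC(kernel="rbf").fit(X, y)
print(clf.decision_function([[0.85, 0.7, 0.65]]))  # signed combined confidence
```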

#14 Incremental dialog clustering for speech-to-speech translation

Authors: David Stallard ; Stavros Tsakalidis ; Shirin Saleem

Application domains for speech-to-speech translation and dialog systems often contain sub-domains and/or task-types for which different outputs are appropriate for a given input. It would be useful to be able to automatically find such sub-domain structure in training corpora, and to classify new interactions with the system into one of these sub-domains. To this end, we present a document-clustering approach to sub-domain classification, which uses a recently developed algorithm based on von Mises-Fisher distributions. We give preliminary perplexity reduction and MT performance results for a speech-to-speech translation system using this model.
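
A brief sketch of the clustering idea: under hard assignments and a shared concentration parameter, a mixture of von Mises-Fisher distributions over unit-normalized document vectors reduces to spherical k-means, which is what this toy implementation shows; the paper's full algorithm also estimates concentration parameters.

```python
# Spherical k-means: cluster unit document vectors by cosine similarity.
import numpy as np

def spherical_kmeans(X, k, iters=20, seed=0):
    """Cluster unit vectors X (n x d) by cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = (X @ centers.T).argmax(axis=1)         # nearest by cosine
        for j in range(k):
            members = X[assign == j]
            if len(members):
                mu = members.sum(axis=0)
                centers[j] = mu / np.linalg.norm(mu)    # mean direction
    return assign, centers

docs = np.abs(np.random.default_rng(1).standard_normal((12, 50)))  # toy TF-IDF
labels, _ = spherical_kmeans(docs, k=3)
print(labels)
```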

#15 Iterative sentence-pair extraction from quasi-parallel corpora for machine translation

Authors: R. Sarikaya ; Sameer Maskey ; R. Zhang ; Ea-Ee Jan ; D. Wang ; Bhuvana Ramabhadran ; S. Roukos

This paper addresses parallel data extraction from the quasi-parallel corpora generated in a crowd-sourcing project where ordinary people watch TV shows and movies and transcribe/translate what they hear, creating document pools in different languages. Since they do not have guidelines for naming and performing translations, it is often not clear which documents are translations of the same show/movie and which sentences are translations of each other in a given document pair. We introduce a method for automatically pairing documents in two languages and extracting parallel sentences from the paired documents. The method consists of three steps: i) document pairing, ii) sentence alignment of the paired documents, and iii) context extrapolation to boost sentence pair coverage. Human evaluation of the extracted data shows that 95% of the extracted sentences carry useful information for translation. Experimental results also show that using the extracted data provides significant gains over a baseline statistical machine translation system built with manually annotated data.

#16 RTTS: towards enterprise-level real-time speech transcription and translation services

Authors: Juan M. Huerta ; Cheng Wu ; Andrej Sakrajda ; Sasha Caskey ; Ea-Ee Jan ; Alexander Faisman ; Shai Ben-David ; Wen Liu ; Antonio Lee ; Osamuyimen Stewart ; Michael Frissora ; David Lubensky

In this paper we describe the RTTS system for enterprise-level real-time speech recognition and translation. RTTS follows a Web Service-based approach which encapsulates ASR and MT technology components, hiding the configuration and tuning complexities and details from client applications while exposing a uniform interface. In this way, RTTS can easily support a wide variety of client applications. The clients we have implemented include a VoIP-based real-time speech-to-speech translation system, a chat and instant-messaging translation system, and a transcription server, among others.

#17 Using syntax in large-scale audio document translation

Authors: Jing Zheng ; Necip Fazil Ayan ; Wen Wang ; David Burkett

Recently, the use of syntax has very effectively improved machine translation (MT) quality in many text translation tasks. However, using syntax in speech translation poses additional challenges because of disfluencies and other spoken language phenomena, and because of errors introduced by automatic speech recognition (ASR). In this paper, we investigate the effect of using syntax in a large-scale audio document translation task targeting broadcast news and broadcast conversations. We do so by comparing the performance of three synchronous context-free grammar based translation approaches: 1) hierarchical phrase-based translation, 2) syntax-augmented MT, and 3) string-to-dependency MT. The results show a positive effect of explicitly using syntax when translating broadcast news, but no benefit when translating broadcast conversations. This indicates that improving the robustness of syntactic systems to conversational language style is important to their success and requires future effort.

#18 Context-driven automatic bilingual movie subtitle alignment

Authors: Andreas Tsiartas ; Prasanta Kumar Ghosh ; Panayiotis G. Georgiou ; Shrikanth S. Narayanan

Movie subtitle alignment is a potentially useful approach for automatically deriving parallel bilingual/multilingual spoken language data for automatic speech translation. In this paper, we consider the movie subtitle alignment task. We propose a distance metric between utterances of different languages based on lexical features derived from bilingual dictionaries, and we use the dynamic time warping algorithm to obtain the best alignment. The best F-score of ~0.713 is obtained using the proposed approach.
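
Both ingredients fit in a short sketch: a lexical distance between subtitle utterances (here, one minus the fraction of source words with a bilingual-dictionary match in the target, an assumed form) and a standard dynamic time warping pass over the resulting cost matrix.

```python
# Dictionary-based lexical distance plus DTW over the cost matrix.
import numpy as np

DICT = {"hello": {"hola"}, "friend": {"amigo"}, "where": {"donde"}}  # toy dictionary

def lex_dist(src, tgt):
    """1 minus the fraction of source words with a dictionary match in the target."""
    src, tgt = src.split(), tgt.split()
    hits = sum(1 for w in src if DICT.get(w, set()) & set(tgt))
    return 1.0 - hits / max(len(src), 1)

def dtw(cost):
    """Total cost of the best monotonic alignment path."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m]

en = ["hello friend", "where are you"]
es = ["hola amigo", "donde estas"]
cost = np.array([[lex_dist(a, b) for b in es] for a in en])
print(dtw(cost))
```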

#20 Evaluating parameters for mapping adult vowels to imitative babbling

Authors: Ilana Heintz ; Mary Beckman ; Eric Fosler-Lussier ; Lucie Ménard

We design a neural network model of first language acquisition to explore the relationship between child and adult speech sounds. The model learns simple vowel categories using a produce-and-perceive babbling algorithm in addition to listening to ambient speech. The model is similar to that of Westermann & Miranda (2004), but adds a dynamic aspect in that it adapts in both the articulatory and acoustic domains to changes in the child's speech patterns. The training data are designed to replicate infant speech sounds and articulatory configurations. By exploring a range of articulatory and acoustic dimensions, we see how the child might learn to draw correspondences between his or her own speech and that of a caretaker, whose productions are quite different from the child's. We also design an imitation evaluation paradigm that gives insight into the strengths and weaknesses of the model.

#21 Intonation of Japanese sentences spoken by English speakers

Author: Chiharu Tsurutani

This study investigated the intonation of Japanese sentences spoken by Australian English speakers and the influence of their first language (L1) prosody on that intonation. Second language (L2) intonation is a complicated product of L1 transfer at two levels of the prosodic hierarchy: the word level and the phrase level. L2 speech is hypothesized to retain the characteristics of the L1, and to gain marked features of the target language only during the late stage of acquisition. Investigation of this hypothesis involved acoustic measurement of L2 speakers' intonation contours and comparison of these contours with those of native speakers.

#22 KLAIR: a virtual infant for spoken language acquisition research

Authors: Mark Huckvale ; Ian S. Howard ; Sascha Fagel

Recent research into the acquisition of spoken language has stressed the importance of learning through embodied linguistic interaction with caregivers rather than through passive observation. However the necessity of interaction makes experimental work into the simulation of infant speech acquisition difficult because of the technical complexity of building real-time embodied systems. In this paper we present KLAIR: a software toolkit for building simulations of spoken language acquisition through interactions with a virtual infant. The main part of KLAIR is a sensori-motor server that supplies a client machine learning application with a virtual infant on screen that can see, hear and speak. By encapsulating the real-time complexities of audio and video processing within a server that will run on a modern PC, we hope that KLAIR will encourage and facilitate more experimental research into spoken language acquisition through interaction.

#23 An articulatory analysis of phonological transfer using real-time MRI

Authors: Joseph Tepperman ; Erik Bresch ; Yoon-Chul Kim ; Sungbok Lee ; Louis Goldstein ; Shrikanth S. Narayanan

Phonological transfer is the influence of a first language on the phonological variations made when speaking a second language. With automatic pronunciation assessment applications in mind, this study aims to uncover evidence of phonological transfer in terms of articulation. Real-time MRI videos from three German speakers of English and three native English speakers are compared to uncover the influence of German consonants on closely related English consonants not found in German. Results show that non-native speakers demonstrate the effects of L1 transfer through the absence of articulatory contrasts seen in native speakers, while still maintaining the minimal articulatory contrasts that are necessary for automatic detection of pronunciation errors, encouraging the further use of articulatory models for speech error characterization and detection.

#24 Do multiple caregivers speed up language acquisition?

Authors: L. ten Bosch ; Okko Johannes Räsänen ; Joris Driesen ; Guillaume Aimetti ; Toomas Altosaar ; Lou Boves ; A. Corns

In this paper we compare three different implementations of language learning to investigate the issue of speaker-dependent initial representations and subsequent generalization. These implementations are used in a comprehensive model of language acquisition under development in the FP6 FET project ACORNS. All algorithms are embedded in a cognitively and ecologically plausible framework, and perform the task of detecting word-like units without any lexical, phonetic, or phonological information. The results show that the computational approaches differ in the extent to which they deal with unseen speakers, and in how generalization depends on the variation observed during training.

#25 Automatic syllabification for Danish text-to-speech systems

Authors: Jeppe Beck ; Daniela Braga ; João Nogueira ; Miguel Sales Dias ; Luis Coelho

In this paper, a rule-based automatic syllabifier for Danish based on the Maximal Onset Principle is described. The success of rule-based methods previously applied to Portuguese and Catalan syllabification modules formed the basis of this work. The system was implemented and tested using a very small set of rules. Contrary to our initial expectations, Danish being a language with a complex syllabic structure and thus seemingly difficult to handle with rules, the results reached word accuracy rates of 96.9% and 98.7%. A comparison with a data-driven syllabification system using artificial neural networks showed a higher accuracy rate for the rule-based system.
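
The Maximal Onset Principle itself is easy to sketch: every vowel is a syllable nucleus, and intervocalic consonants are attached to the following syllable as long as the resulting onset is legal. The phone classes and legal onsets below are toy English-like assumptions, not the paper's Danish rule set.

```python
# Maximal Onset Principle syllabification over a toy onset inventory.
VOWELS = set("aeiouy")
LEGAL_ONSETS = {"", "b", "d", "f", "k", "l", "m", "n", "p", "r", "s", "t",
                "bl", "br", "dr", "fl", "fr", "kl", "kr", "pl", "pr", "st",
                "str", "tr"}

def syllabify(word):
    """Split a word into syllables, giving each onset as many consonants as legal."""
    nuclei = [i for i, ch in enumerate(word) if ch in VOWELS]
    if not nuclei:
        return [word]
    syllables, start = [], 0
    for prev, nxt in zip(nuclei, nuclei[1:]):
        cluster = word[prev + 1:nxt]            # consonants between two nuclei
        split = 0
        for k in range(len(cluster) + 1):       # smallest k = maximal onset
            if cluster[k:] in LEGAL_ONSETS:
                split = k
                break
        syllables.append(word[start:prev + 1 + split])
        start = prev + 1 + split
    syllables.append(word[start:])
    return syllables

print(syllabify("astronomi"))   # ['a', 'stro', 'no', 'mi']
```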